    On Using Physical Analogies for Feature and Shape Extraction in Computer Vision

    There is a rich literature of approaches to image feature extraction in computer vision. Many sophisticated approaches exist for low- and high-level feature extraction, but they can be complex to implement, with parameter choice guided by experimentation and with performance analysis and optimisation impeded by the speed of computation. We have developed new feature extraction techniques based on the notional use of physical paradigms, with parameterisation intended to be more familiar to a scientifically trained user, and aiming to make best use of computational resource. This paper is the first unified description of these new approaches, outlining their basis and the results that can be achieved. We describe how gravitational force can be used for low-level analysis, while analogies of water flow and heat can be deployed to achieve high-level smooth shape detection, determining features and shapes in a selection of images and comparing the results with those of standard approaches from the literature. We also aim to show that the implementation is consistent with the original motivations for these techniques, and so contend that the exploration of physical paradigms offers a promising avenue for new approaches to feature extraction in computer vision.
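    The abstract does not spell out the gravitational operator, but the analogy it names is usually realised by treating each pixel as a point mass equal to its intensity and summing inverse-square attractions over the whole image. The Python/NumPy sketch below illustrates that reading; the function name, the use of intensity as mass, and the FFT-based summation (one way to honour the stated concern for computational resource) are illustrative assumptions, not the authors' implementation.

        import numpy as np
        from scipy.signal import fftconvolve

        def gravitational_force_field(img):
            """Net inverse-square attraction at every pixel of a 2-D
            grayscale image, treating pixel intensity as mass:
            F(p) = sum_q m(q) * (q - p) / |q - p|**3."""
            h, w = img.shape
            ys, xs = np.mgrid[-(h - 1):h, -(w - 1):w].astype(float)
            r3 = (ys ** 2 + xs ** 2) ** 1.5
            r3[h - 1, w - 1] = np.inf   # no self-force at zero displacement
            # fftconvolve flips the (odd) kernel, so negate it to recover
            # the correlation sum above in O(N log N) rather than O(N^2).
            fy = fftconvolve(img, -ys / r3, mode="same")
            fx = fftconvolve(img, -xs / r3, mode="same")
            return fy, fx

        # the force magnitude then serves as a low-level feature map:
        # fy, fx = gravitational_force_field(img)
        # features = np.hypot(fy, fx)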

    TREEOME: A framework for epigenetic and transcriptomic data integration to explore regulatory interactions controlling transcription

    Motivation: Predictive modelling of gene expression is a powerful framework for the in silico exploration of transcriptional regulatory interactions through the integration of high-throughput -omics data. A major limitation of previous approaches is their inability to handle conditional and synergistic interactions that emerge when collectively analysing genes subject to different regulatory mechanisms. This limitation reduces overall predictive power and thus the reliability of downstream biological inference.
    Results: We introduce an analytical modelling framework (TREEOME: tree of models of expression) that integrates epigenetic and transcriptomic data by separating genes into putative regulatory classes. Current predictive modelling approaches have found that both DNA methylation and histone modification epigenetic data provide little or no improvement in the accuracy of prediction of transcript abundance, despite, for example, the distinct anti-correlation between mRNA levels and promoter-localised DNA methylation. To improve on this, in TREEOME we evaluate four possible methods of formulating gene-level DNA methylation metrics, which provide a foundation for identifying gene-level methylation events and subsequent differential analysis, whereas most previous techniques operate at the level of individual CpG dinucleotides. We demonstrate TREEOME by integrating gene-level DNA methylation (bisulfite-seq) and histone modification (ChIP-seq) data to accurately predict genome-wide mRNA transcript abundance (RNA-seq) for H1-hESC and GM12878 cell lines.
    Availability: TREEOME is implemented using open-source software and made available as a pre-configured bootable reference environment. All scripts and data presented in this study are available online at http://sourceforge.net/projects/budden2015treeome/. Comment: 14 pages, 6 figures.
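    The four gene-level methylation metrics are not specified in the abstract, so the sketch below shows just one plausible formulation: a coverage-weighted mean methylation fraction over a promoter window around each gene's TSS. The column names, the 2 kb window, and the pandas representation are illustrative assumptions, not TREEOME's actual interface.

        import pandas as pd

        def promoter_methylation(cpg_df, genes_df, pad=2000):
            """Gene-level metric from per-CpG bisulfite-seq calls:
            coverage-weighted mean methylation over TSS +/- pad.

            cpg_df: columns chrom, pos, meth_reads, total_reads
            genes_df: columns gene, chrom, tss"""
            out = {}
            for _, g in genes_df.iterrows():
                win = cpg_df[(cpg_df.chrom == g.chrom)
                             & cpg_df.pos.between(g.tss - pad, g.tss + pad)]
                cov = win.total_reads.sum()
                # weight by coverage so sparsely read CpGs don't dominate
                out[g.gene] = win.meth_reads.sum() / cov if cov else float("nan")
            return pd.Series(out, name="promoter_methylation")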

    Force field feature extraction for ear biometrics

    The overall objective in defining feature space is to reduce the dimensionality of the original pattern space, whilst maintaining discriminatory power for classification. To meet this objective in the context of ear biometrics, a new force field transformation treats the image as an array of mutually attracting particles that act as the source of a Gaussian force field. Underlying the force field there is a scalar potential energy field, which in the case of an ear takes the form of a smooth surface that resembles a small mountain with a number of peaks joined by ridges. The peaks correspond to potential energy wells and, to extend the analogy, the ridges correspond to potential energy channels. Since the transform also turns out to be invertible, and since the surface is otherwise smooth, information theory suggests that much of the information is transferred to these features, thus confirming their efficacy. We previously described how field line feature extraction, using an algorithm similar to gradient descent, exploits the directional properties of the force field to automatically locate these channels and wells, which then form the basis of characteristic ear features. We now show how an analysis of the mechanism of this algorithmic approach leads to a closed analytical description based on the divergence of force direction, which reveals that channels and wells are really manifestations of the same phenomenon. We further show that this new operator, with its own distinct advantages, has a striking similarity to the Marr-Hildreth operator, but with the important difference that it is non-linear. As well as addressing faster implementation, invertibility, and brightness sensitivity, the technique is validated by performing recognition on a database of ears selected from the XM2VTS face database, and by comparing the results with the more established technique of Principal Components Analysis. This confirms not only that ears do indeed appear to have potential as a biometric, but also that the new approach is well suited to their description, being especially robust in the presence of noise, and having the advantage that the ear does not need to be explicitly extracted from the background.
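    The closed analytical description referred to above can be sketched directly: the scalar potential is a 1/r convolution of image intensity, and the new operator is (up to sign) the divergence of the unit-normalised force field. The Python/NumPy version below is an assumed reading of that description, not the authors' code; the inverse-distance potential, the finite-difference divergence, and the small epsilon guard are illustrative choices. The force components could come from a gravitational-style field such as the earlier sketch.

        import numpy as np
        from scipy.signal import fftconvolve

        def potential_energy_field(img):
            """Scalar potential: each pixel contributes intensity/distance,
            so wells and channels appear as peaks and ridges of this surface."""
            h, w = img.shape
            ys, xs = np.mgrid[-(h - 1):h, -(w - 1):w].astype(float)
            r = np.hypot(ys, xs)
            r[h - 1, w - 1] = np.inf       # drop the singular self-term
            return fftconvolve(img, 1.0 / r, mode="same")

        def convergence(fy, fx):
            """Negative divergence of the force *direction*; normalising
            before differentiating is what makes the operator non-linear,
            unlike the Marr-Hildreth operator it otherwise resembles."""
            mag = np.hypot(fy, fx) + 1e-12     # guard against zero force
            return -(np.gradient(fy / mag, axis=0)
                     + np.gradient(fx / mag, axis=1))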

    SGR 1806-20 Is a Set of Independent Relaxation Systems

    The Soft Gamma Repeater 1806-20 produced patterns of bursts during its 1983 outburst that indicate multiple independent energy accumulation sites, each driven by a continuous power source, with sudden, incomplete releases of the accumulated energy. The strengths of the power sources and their durations of activity vary over several orders of magnitude. Comment: Accepted to ApJ Letters, 15 pages, 3 figures.
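    The relaxation-system picture the abstract describes, in which energy accumulates at a steady rate and is only partially released in sudden bursts, can be made concrete with a toy simulation. The sketch below is purely illustrative: the threshold, its jitter, and the release fraction are arbitrary choices, not parameters fitted to SGR 1806-20.

        import random

        def relaxation_system(power, threshold, release_frac, steps, seed=0):
            """Toy relaxation oscillator: energy accumulates at a constant
            rate and, on reaching a (jittered) threshold, is only partially
            released -- the 'sudden, incomplete release' of the abstract."""
            rng = random.Random(seed)
            e, bursts = 0.0, []
            for t in range(steps):
                e += power
                if e >= threshold * rng.uniform(0.8, 1.2):
                    released = release_frac * e
                    bursts.append((t, released))
                    e -= released              # incomplete: residue remains
            return bursts

        # several such systems, with power sources spanning orders of
        # magnitude, yield overlapping burst patterns of the kind described
        for p in (0.01, 0.1, 1.0):
            print(len(relaxation_system(p, 10.0, 0.7, 5000)))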

    Transmission measurements through liquids in a White cell

    The suitability of using the White cell for determining the extinction coefficient of liquids was examined by measuring the attenuation of electromagnetic radiation, in the visible part of the spectrum, through distilled water. The analysis of transmittance measurements through liquids in a White cell does not provide a sufficient number of independent equations to solve directly for the extinction coefficient. Reasonable estimates can nevertheless be made by employing a correction that accounts for changes in the mirror reflectance due to the liquid-mirror interface. The reduction in the transmittance of distilled water due to an oil film was studied for fuel oils No. 2 and No. 3 and for midwestern crude oil. The reduction depends on the thickness and the extinction coefficient of the oil. It was found that a thin film of crude oil, approximately six thousandths of an inch thick, can reduce the transmittance to almost zero in the range where the water's transmittance is at a maximum (0.45 µm to 0.50 µm). Errors resulting from using transmittance measurements to determine the extinction coefficient of liquids were also examined. The analysis reveals that transmittance measurements through two cells whose optical paths differ by a factor of 1.3 yield the highest level of accuracy.
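    The closing claim about two cells of different optical path follows from the standard two-path Beer-Lambert estimate, in which the unknown mirror-reflectance and interface losses cancel in the ratio of the two transmittances. A small worked sketch follows; the numerical readings are hypothetical, chosen only to realise the recommended 1.3 path ratio.

        import math

        def extinction_coefficient(T1, L1, T2, L2):
            """Two-path estimate: T = K * exp(-alpha * L) with the same
            loss factor K in both cells, so alpha = ln(T1/T2) / (L2 - L1)."""
            return math.log(T1 / T2) / (L2 - L1)

        # hypothetical readings over 10 cm and 13 cm paths (ratio 1.3)
        alpha = extinction_coefficient(0.62, 10.0, 0.48, 13.0)
        print(f"alpha ~ {alpha:.4f} per cm")   # ~0.0853 per cm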

    Preferences over the Fair Division of Goods: Information, Good, and Sample Effects in a Health Context

    Greater recognition by economists of the influential role that concern for distributional equity plays in decision making in a variety of economic contexts has spurred interest in empirical research on public judgments of fair distribution. Using a stated-preference experimental design, this paper contributes to the growing literature on fair division by investigating the empirical support for each of five distributional principles (equal division among recipients, Rawlsian maximin, total benefit maximization, equal benefit for recipients, and allocation according to relative need among recipients) in the division of a fixed bundle of a good, across settings that differ with respect to the good being allocated (a health care good, pills, versus a non-health care but still health-affecting good, apples) and the way that alternative possible divisions of the good are described (quantitative information only, verbal information only, or both). It also offers new evidence on sample effects (university sample vs. community samples) and on how the aggregate ranking of principles is affected by alternative vote-scoring methods. We find important information effects. When presented with quantitative information only, support for the division that equalizes benefit across recipients is consistent with that found in previous research; changing to verbal descriptions causes a notable shift in support among principles, especially between equal division of the good and total benefit maximization. The judgments made when presented with both quantitative and verbal information match more closely those made with quantitative-only descriptions than those made with verbal-only descriptions, suggesting that the quantitative information dominates. The information effects we observe are consistent with a lack of understanding among participants of the relationship between the principles and the associated quantitative allocations. We also find modest good effects in the expected direction: the fair division of pills is tied more closely to benefit-related criteria than is the fair division of apples (even though both produce health benefits). We find evidence of only small differences between the university and community samples, and important sex-information interactions. Keywords: distributive justice, equity, resource allocation, health care.

    Teaching with the Framework: a Cephalonian Approach

    Purpose: This paper aims to provide academic instruction librarians with a model for integrating concepts from the Association of College and Research Libraries (ACRL) Framework into “one-shot” library instruction sessions without losing the practical experience of searching the library resources.
    Design/methodology/approach: The authors adapted the Cephalonian method as the structure of first-year library instruction sessions for an English composition class. The sessions were re-designed to emphasize the core concepts of information literacy while incorporating active learning activities and discussion.
    Findings: The authors found the Cephalonian method to be a useful structure for incorporating aspects of the ACRL Framework into the first-year library instruction program. The call-and-response format fosters conversations and leads seamlessly into hands-on activities. When used as part of “flipped” instruction, the Cephalonian method allows instructors to engage both students who have completed the online portion and those who have not.
    Practical implications: This paper offers librarians practical ideas for incorporating the information literacy concepts outlined in the ACRL Framework into one-shot instruction sessions.
    Originality/value: With the recent adoption of the Framework for Information Literacy for Higher Education by ACRL, there is a need for practical examples of how to incorporate the frames into existing library instruction programs.

    N-acetylcysteine Decreases Binge Eating in a Rodent Model

    Binge-eating behavior involves rapid consumption of highly palatable foods and leads to increased weight gain. Feeding in binge disorders resembles other compulsive behaviors, many of which are responsive to N-acetylcysteine (NAC), a cysteine prodrug often used to promote non-vesicular glutamate release by the cystine–glutamate antiporter. To examine the potential for NAC to alter a form of compulsive eating, we examined its impact on binge eating in a rodent model. Specifically, we monitored consumption of standard chow and of a high-fat, high-carbohydrate western diet (WD) in a rodent limited-access binge paradigm. Before each session, rats received either a systemic or an intraventricular injection of NAC. Both systemic and central administration of NAC resulted in significant reductions in binge eating of the WD without decreasing standard chow consumption. The reduction in WD intake was not attributable to general malaise, as NAC did not produce conditioned taste aversion. These results are consistent with clinical evidence that NAC can reduce or reverse compulsive behaviors such as drug addiction, skin picking, and hair pulling.